Search Results
Search for: All records
Total Resources: 5
Author / Contributor
- Wattenberg, Martin (5)
- Li, Kenneth (3)
- Pfister, Hanspeter (3)
- Viégas, Fernanda (3)
- Bau, David (2)
- Arawjo, Ian (1)
- Bashkansky, Naomi (1)
- Chau, Duen Horng (1)
- Glassman, Elena L. (1)
- Hopkins, Aspen K. (1)
- Kahng, Minsuk (1)
- Liu, Tianle (1)
- Patel, Oam (1)
- Swoopes, Chelse (1)
- Thorat, Nikhil (1)
- Vaithilingam, Priyan (1)
- Viégas, Fernanda B. (1)
System prompting is a standard tool for customizing language-model chatbots, enabling them to follow a specific instruction. An implicit assumption in the use of system prompts is that they will be stable: the chatbot will continue to generate text according to the stipulated instructions for the duration of a conversation. We propose a quantitative benchmark to test this assumption, evaluating instruction stability via self-chats between two instructed chatbots. Testing popular models such as LLaMA2-chat-70B and GPT-3.5, we reveal significant instruction drift within eight rounds of conversation. An empirical and theoretical analysis of this phenomenon suggests that the transformer attention mechanism plays a role, via attention decay over long exchanges. To combat attention decay and instruction drift, we propose a lightweight method called split-softmax, which compares favorably against two strong baselines.
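The core idea behind countering attention decay can be sketched in a few lines: if system-prompt tokens lose attention mass as the conversation grows, reallocate a fixed share of each query's attention back to them. The NumPy sketch below is a minimal, hypothetical illustration of that reweighting; the function names, the parameter `p`, and the exact reallocation rule are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def split_softmax(scores, n_sys, p):
    """Illustrative reweighting: the first n_sys positions (the system
    prompt) are rescaled to hold a total attention mass of p, and the
    remaining positions share the other 1 - p. Hypothetical sketch only."""
    w = softmax(scores)
    sys_mass = w[:n_sys].sum()
    rest_mass = w[n_sys:].sum()
    out = w.copy()
    out[:n_sys] *= p / sys_mass        # boost decayed system-prompt attention
    out[n_sys:] *= (1 - p) / rest_mass # renormalize the conversation tokens
    return out

# Toy pre-softmax scores for one query over five key positions,
# where the first two positions belong to the system prompt.
scores = np.array([1.0, 0.5, 2.0, 2.5, 3.0])
w = split_softmax(scores, n_sys=2, p=0.3)
```

After the reweighting, the system-prompt positions hold exactly 30% of the attention mass and the distribution still sums to one, regardless of how far the scores for those positions have decayed.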
-
Arawjo, Ian; Swoopes, Chelse; Vaithilingam, Priyan; Wattenberg, Martin; Glassman, Elena L. (ACM)
-
Li, Kenneth; Patel, Oam; Viégas, Fernanda; Pfister, Hanspeter; Wattenberg, Martin (NeurIPS)
-
Li, Kenneth; Hopkins, Aspen K.; Bau, David; Viégas, Fernanda; Pfister, Hanspeter; Wattenberg, Martin (ICLR)
-
Kahng, Minsuk; Thorat, Nikhil; Chau, Duen Horng; Viégas, Fernanda B.; Wattenberg, Martin (IEEE Transactions on Visualization and Computer Graphics)